Data for this assignment are available to download from Canvas (Data lab2). For your convenience, you can load all the libraries required for this assignment at the beginning of your R Markdown.
library(SpatialKDE)
library(sp)
library(sf)
## Linking to GEOS 3.13.0, GDAL 3.8.5, PROJ 9.5.1; sf_use_s2() is TRUE
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(tmap)
library(spatstat)
## Loading required package: spatstat.data
## Loading required package: spatstat.univar
## spatstat.univar 3.1-2
## Loading required package: spatstat.geom
## spatstat.geom 3.3-6
## Loading required package: spatstat.random
## spatstat.random 3.3-3
## Loading required package: spatstat.explore
## Loading required package: nlme
##
## Attaching package: 'nlme'
## The following object is masked from 'package:dplyr':
##
## collapse
## spatstat.explore 3.3-4
## Loading required package: spatstat.model
## Loading required package: rpart
## spatstat.model 3.3-4
## Loading required package: spatstat.linnet
## spatstat.linnet 3.2-5
##
## spatstat 3.3-1
## For an introduction to spatstat, type 'beginner'
You are provided with the 2001 census data by ethnicity for the Auckland isthmus, one file at the meshblock level (akCity_mb01_ethnic) and the other at the census area unit (CAU) level (akCity_CAU01_ethnic). Both are saved in files on Canvas.
Use library(sf) to load the akCity_mb01_ethnic shapefile as "Ethnicity_mesh" (you can name this dataset as you wish).
I created a variable for the proportion of the Asian population for you, saved as pro_Asian. Its values range from 0 to 1, so make sure you convert them to percentages. Your data may not have a projection; if that happens, you can specify the projection in tmap as NULL.
Create a 5-class (quantile or equal-interval) choropleth map showing the percentage of the Asian population. Remember the map title, north arrow, legend, etc.
Ethnicity_mesh<- st_read("~/Downloads/GEOG_DATA2/akCity_mb01_ethnic.shp")
## Reading layer `akCity_mb01_ethnic' from data source
## `/Users/llllllyh/Downloads/GEOG_DATA2/akCity_mb01_ethnic.shp'
## using driver `ESRI Shapefile'
## Simple feature collection with 2990 features and 14 fields
## Geometry type: POLYGON
## Dimension: XY
## Bounding box: xmin: 2658430 ymin: 6469654 xmax: 2679068 ymax: 6483715
## Projected CRS: NZGD49 / New Zealand Map Grid
Ethnicity_mesh$pro_Asian <- Ethnicity_mesh$pro_Asian * 100 # Convert pro_Asian from proportion to percentage
map1 <- tm_shape(Ethnicity_mesh) +
tm_fill("pro_Asian",
style = "quantile", # quantile classification; choose the style that best fits the data
n = 5, # set up 5-class
palette = "Blues", # choose the colour
title = "Percentage of Asian Population") +
tm_borders() +
tm_layout(title = "Asian Population Distribution in Auckland (2001)", # map title
legend.outside = TRUE,
legend.title.size = 1,
legend.text.size = 0.8, # size
compass.type = "8star", # set the north arrow
inner.margins = c(0.05, 0.02, 0.02, 0.05)) +
tm_compass() +
tm_scale_bar()
##
## ── tmap v3 code detected ───────────────────────────────────────────────────────
## [v3->v4] `tm_fill()`: instead of `style = "quantile"`, use fill.scale =
## `tm_scale_intervals()`.
## ℹ Migrate the argument(s) 'style', 'n', 'palette' (rename to 'values') to
## 'tm_scale_intervals(<HERE>)'
## [v3->v4] `tm_fill()`: migrate the argument(s) related to the legend of the
## visual variable `fill` namely 'title' to 'fill.legend = tm_legend(<HERE>)'
## [v3->v4] `tm_layout()`: use `tm_title()` instead of `tm_layout(title = )`
## ! `tm_scale_bar()` is deprecated. Please use `tm_scalebar()` instead.
map1
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "Blues" is named
## "brewer.blues"
## Multiple palettes called "blues" found: "brewer.blues", "matplotlib.blues". The first one, "brewer.blues", is returned.
##
## [plot mode] fit legend/component: Some legend items or map compoments do not
## fit well, and are therefore rescaled.
## ℹ Set the tmap option `component.autoscale = FALSE` to disable rescaling.
WRITTEN PART STARTS
The spatial pattern suggests that in 2001 the Asian population in Auckland was concentrated in the central and south-eastern parts of the isthmus. These areas may correspond to early cores of Asian immigration (such as Mt Roskill) or to locations close to education and business centres (the CBD). Representation is lower in the western and outlying suburban areas, perhaps because of geographical isolation or poorer transport links.
WRITTEN PART ENDS
# file reading
Ethnicity2<-read_sf("~/Downloads/GEOG_DATA2/akCity_CAU01_ethnic.shp")
Ethnicity2$Asian_prop <- Ethnicity2$Asian_prop * 100 # Convert Asian_prop to percentage
# Create a 5-class quantile choropleth map
map2 <- tm_shape(Ethnicity2) +
tm_fill("Asian_prop",
style = "quantile",
n = 5, # 5-class
palette = "Blues", # colour set up
title = "Percentage of Asian Population") +
tm_borders() +
tm_layout(title = "Asian Population Distribution by CAU (2001)",
legend.outside = TRUE,
legend.title.size = 1,
legend.text.size = 0.8,
compass.type = "8star",
inner.margins = c(0.05, 0.02, 0.02, 0.05)) +
tm_compass() +
tm_scale_bar()
##
## ── tmap v3 code detected ───────────────────────────────────────────────────────
## [v3->v4] `tm_fill()`: instead of `style = "quantile"`, use fill.scale =
## `tm_scale_intervals()`.
## ℹ Migrate the argument(s) 'style', 'n', 'palette' (rename to 'values') to
## 'tm_scale_intervals(<HERE>)'
## [v3->v4] `tm_fill()`: migrate the argument(s) related to the legend of the
## visual variable `fill` namely 'title' to 'fill.legend = tm_legend(<HERE>)'
## [v3->v4] `tm_layout()`: use `tm_title()` instead of `tm_layout(title = )`
## ! `tm_scale_bar()` is deprecated. Please use `tm_scalebar()` instead.
map2
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "Blues" is named
## "brewer.blues"
## Multiple palettes called "blues" found: "brewer.blues", "matplotlib.blues". The first one, "brewer.blues", is returned.
##
## [plot mode] fit legend/component: Some legend items or map compoments do not
## fit well, and are therefore rescaled.
## ℹ Set the tmap option `component.autoscale = FALSE` to disable rescaling.
WRITTEN PART STARTS
The spatial pattern in the map shows that the Asian population in Auckland (2001) is unevenly distributed. Higher concentrations (darker blue) are found in central and southeastern areas, while lower concentrations (lighter blue) are more common in western and peripheral suburbs. The distribution suggests that Asian communities were more established in specific urban and suburban regions.
WRITTEN PART ENDS
#If you are using ordinary plots you can use the following commands:
#par(mfrow=c(1,2))
#plot(st_geometry(Ethnicity2))
#plot(st_geometry(Ethnicity_mesh))
# What happens if you change mfrow=c(2,1)?
#par(mfrow=c(2,1))
# If you are using tmap - look into tmap_arrange(map1, map2, ncol=1,nrow=2)
tmap_arrange(map1, map2, ncol = 2, nrow = 1)
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "Blues" is named
## "brewer.blues"
## Multiple palettes called "blues" found: "brewer.blues", "matplotlib.blues". The first one, "brewer.blues", is returned.
##
## [plot mode] fit legend/component: Some legend items or map compoments do not
## fit well, and are therefore rescaled.
## ℹ Set the tmap option `component.autoscale = FALSE` to disable rescaling.
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "Blues" is named
## "brewer.blues"
## Multiple palettes called "blues" found: "brewer.blues", "matplotlib.blues". The first one, "brewer.blues", is returned.
##
## [plot mode] fit legend/component: Some legend items or map compoments do not
## fit well, and are therefore rescaled.
## ℹ Set the tmap option `component.autoscale = FALSE` to disable rescaling.
tmap_arrange(map1, map2, ncol = 1, nrow = 2)
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "Blues" is named
## "brewer.blues"
## Multiple palettes called "blues" found: "brewer.blues", "matplotlib.blues". The first one, "brewer.blues", is returned.
##
## [plot mode] fit legend/component: Some legend items or map compoments do not
## fit well, and are therefore rescaled.
## ℹ Set the tmap option `component.autoscale = FALSE` to disable rescaling.
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "Blues" is named
## "brewer.blues"
## Multiple palettes called "blues" found: "brewer.blues", "matplotlib.blues". The first one, "brewer.blues", is returned.
##
## [plot mode] fit legend/component: Some legend items or map compoments do not
## fit well, and are therefore rescaled.
## ℹ Set the tmap option `component.autoscale = FALSE` to disable rescaling.
WRITTEN PART
Lowest Values: In the first (meshblock) map, the lowest class is 0.0 to 3.4%, while in the second (CAU) map it shifts slightly higher to 3.80 to 7.84%. Aggregating the fine meshblocks into larger CAUs averages very low values together with their neighbours, so the extreme lows disappear.
Highest Values: The first map's highest class is 24.3 to 100.0%, reflecting individual meshblocks with very high Asian proportions. In the second map the highest class narrows to 24.13 to 37.51%: averaging over the larger CAUs smooths out the extreme highs, so the most concentrated pockets are no longer visible.
Pattern Change: The fine meshblock map reveals internal heterogeneity and local extremes, whereas the coarser CAU map masks them, making high values appear less pronounced and the overall pattern smoother.
Relate to MAUP: This is the scale effect of the Modifiable Areal Unit Problem. Coarser units smooth over local variation and compress the range of values, while finer units expose local extremes. Redefining boundaries also redistributes the population percentages between zones, changing the apparent spatial pattern.
WRITTEN PART ENDS
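The range compression described above can be sketched with a toy example (the numbers are made up for illustration and are not taken from the census data):

```r
# Hypothetical fine-scale percentages and a coarser zoning that pairs them up
vals  <- c(2, 5, 40, 90, 10, 12, 30, 60)
zones <- rep(1:4, each = 2)
agg   <- tapply(vals, zones, mean)   # aggregate by averaging within zones

range(vals)  # 2 to 90: extremes visible at the fine scale
range(agg)   # 3.5 to 65: aggregation compresses the extremes
```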
VIDEO TO WATCH: To understand k-means clustering, please watch the video and, if you need to, look through the attached document:
Our data have the Ethnicity_mesh$CNTRD_X and Ethnicity_mesh$CNTRD_Y variables, which hold the coordinates of the meshblocks' centroids; these will be used as the input to the clustering.
I chose to cluster the data into 100 clusters (groupings) from the initial 2990 meshblocks in the Auckland area.
To check the number of meshblocks you can use:
# Number of meshblocks
length(Ethnicity_mesh$geometry)
## [1] 2990
# use kmeans clustering to create 100 clusters of ethnicity for Auckland and add the kmeans cluster column to the Ethnicity_mesh data frame
km <- kmeans(cbind(Ethnicity_mesh$CNTRD_X,Ethnicity_mesh$CNTRD_Y), centers = 100)
km2 <- kmeans(cbind(Ethnicity_mesh$CNTRD_X,Ethnicity_mesh$CNTRD_Y), centers = 100,iter.max = 35,nstart = 20) #iter.max set to 35 maximum iterations allowed, if centers is a number 20 random sets should be chosen using nstart
# you can plot the results of km and km2 and see the differences.
par(mfrow=c(1,2))
plot(Ethnicity_mesh$CNTRD_X,Ethnicity_mesh$CNTRD_Y,col=km$cluster,pch=20)
plot(Ethnicity_mesh$CNTRD_X,Ethnicity_mesh$CNTRD_Y,col=km2$cluster,pch=20)
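Note that kmeans() starts from random centres, so km and km2 (and the maps built from them) will change every time the document is knitted. If reproducibility matters, one option is to set a seed before each call; this sketch uses an arbitrary seed value and a new variable name so it does not overwrite the objects above:

```r
set.seed(123)  # arbitrary seed; any fixed integer makes the clustering repeatable
km_fixed <- kmeans(cbind(Ethnicity_mesh$CNTRD_X, Ethnicity_mesh$CNTRD_Y),
                   centers = 100, iter.max = 35, nstart = 20)
```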
# next you will need to assign the created clusters back to the main dataset. I used km to do so:
Ethnicity_mesh$Cluster<-as.factor(km$cluster)
## by doing so we have added clusters (100 different ones) to our meshblock dataset. In order to see it on a map we need to dissolve our meshblocks into new geometries
num_cols <- sapply(Ethnicity_mesh, is.numeric)
Ethnicity_mesh_numeric <- Ethnicity_mesh[, num_cols]
#dissolve meshblocks by type of Cluster
Ethnicity_new_clusters <- aggregate(Ethnicity_mesh_numeric, by = list(x = Ethnicity_mesh$Cluster),
FUN = mean, na.rm = TRUE)
## just use the Ethnicity_new_clusters$pro_Asian to make a map.
tm_shape(Ethnicity_new_clusters) +
tm_fill("pro_Asian",
palette = "Blues",
title = "Percentage of Asian Population by Cluster") +
tm_borders() +
tm_layout(title = "Aggregated Asian Population by Clusters") +
tm_compass() +
tm_scale_bar()
##
## ── tmap v3 code detected ───────────────────────────────────────────────────────
## [v3->v4] `tm_tm_fill()`: migrate the argument(s) related to the scale of the
## visual variable `fill` namely 'palette' (rename to 'values') to fill.scale =
## tm_scale(<HERE>).
## [v3->v4] `tm_fill()`: migrate the argument(s) related to the legend of the
## visual variable `fill` namely 'title' to 'fill.legend = tm_legend(<HERE>)'
## [v3->v4] `tm_layout()`: use `tm_title()` instead of `tm_layout(title = )`
## ! `tm_scale_bar()` is deprecated. Please use `tm_scalebar()` instead.
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "Blues" is named
## "brewer.blues"
## Multiple palettes called "blues" found: "brewer.blues", "matplotlib.blues". The first one, "brewer.blues", is returned.
Ethnicity_mesh$Cluster_km2<-as.factor(km2$cluster)
# assign the created clusters back to the main dataset
num_cols <- sapply(Ethnicity_mesh, is.numeric)
Ethnicity_mesh_numeric <- Ethnicity_mesh[, num_cols] # selecting only the numeric columns avoids the aggregate() warning
Ethnicity_new_clusters_km2 <- aggregate(Ethnicity_mesh_numeric,
by = list(Cluster = Ethnicity_mesh$Cluster_km2),
FUN = mean, na.rm = TRUE)
# plot
tm_shape(Ethnicity_new_clusters_km2) +
tm_fill("pro_Asian",
palette = "Blues",
title = "Percentage of Asian Population by Cluster for km2") +
tm_borders() +
tm_layout(title = "Aggregated Asian Population by Clusters (km2)") +
tm_compass() +
tm_scale_bar()
##
## ── tmap v3 code detected ───────────────────────────────────────────────────────
## [v3->v4] `tm_tm_fill()`: migrate the argument(s) related to the scale of the
## visual variable `fill` namely 'palette' (rename to 'values') to fill.scale =
## tm_scale(<HERE>).
## [v3->v4] `tm_fill()`: migrate the argument(s) related to the legend of the
## visual variable `fill` namely 'title' to 'fill.legend = tm_legend(<HERE>)'
## [v3->v4] `tm_layout()`: use `tm_title()` instead of `tm_layout(title = )`
## ! `tm_scale_bar()` is deprecated. Please use `tm_scalebar()` instead.
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "Blues" is named
## "brewer.blues"
## Multiple palettes called "blues" found: "brewer.blues", "matplotlib.blues". The first one, "brewer.blues", is returned.
WRITTEN PART STARTS
High-value areas (20%–40%): these show a locally concentrated, clumped distribution, corresponding to the traditional core areas of Asian immigration in Auckland. The high-value areas are discontinuous, with jumps between them.
Low-value areas (0%–10%): these are mainly in the west (Waitakere) and the north (Rodney), coinciding with natural landscape or low-density residential areas. The boundaries of the low-value areas are clearer, meaning the clustering algorithm groups these areas into the same classes.
Transition zone (10%–20%): a buffer forms between the high and low values, which may represent transitional areas of immigrant spread or mixed communities.
Missing values: any uncoloured areas may reflect data gaps caused by uninhabited industrial land, nature reserves, or edge areas not covered during clustering.
Effect of MAUP: with larger units, the high-value zones may be smoothed and appear more extensive; with smaller units, local high-value points may emerge. MAUP reminds us that patterns on maps are not necessarily real, natural distributions but may depend on artificially drawn boundaries. In other words, changing how the areas are divided can change the mapped result.
WRITTEN PART ENDS
library(spatstat)
data(lansing)
lansing
## Marked planar point pattern: 2251 points
## Multitype, with levels = blackoak, hickory, maple, misc, redoak, whiteoak
## window: rectangle = [0, 1] x [0, 1] units (one unit = 924 feet)
plot(lansing)
plot(split(lansing))
plot(density(split(lansing)))
A few random point patterns can be generated using built-in functions that start with ‘r’ in R. The simplest type of spatial point process is ‘CSR’ or more properly the Poisson process. Complete spatial randomness describes a point process whereby point events occur within a given study area in a completely random fashion:
plot(rpoispp(100))
It is based on a Poisson process with intensity 100. This expression works perfectly well, except that the full call has to be retyped every time it is used. This can be simplified by assigning the result to a short variable such as x.
x = rpoispp(100)
plot(x)
More informative displays can be produced using:
plot(density(x))
contour(density(x), add=T)
plot(x, add=T)
Other types of point process include rMaternII and rMatClust. Run the following two commands:
plot(rMaternII(100, 0.1))
plot(rMatClust(100, 0.05, 5))
WRITTEN PART STARTS
rpoispp(100) generates a random point pattern from a Poisson process with intensity 100: the points are placed independently and uniformly at random, exhibiting complete spatial randomness (CSR).
rMaternII(100, 0.1) generates a Matérn Model II inhibition process: starting from a Poisson pattern, any point lying within the inhibition distance (0.1) of an older point is deleted, producing a more evenly spaced (regular) pattern than CSR.
rMatClust(100, 0.05, 5) generates a Matérn cluster process: parent points come from a Poisson process, and each parent produces on average 5 offspring within a radius of 0.05, so the points group together into tight clusters.
WRITTEN PART ENDS
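One rough way to back these descriptions up numerically is to compare mean nearest-neighbour distances across the three processes (a sketch only; the exact values vary between runs because the patterns are random):

```r
library(spatstat)

set.seed(1)  # for a repeatable comparison
mean(nndist(rpoispp(100)))             # CSR baseline
mean(nndist(rMaternII(100, 0.1)))      # inhibition: spacing larger than CSR
mean(nndist(rMatClust(100, 0.05, 5)))  # clustering: spacing smaller than CSR
```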
Data needed for this part are stored in the text file SpeciesMap.txt, which contains point data of tree species, one of which is kauri (Agathis australis, code AGAU). The data can be read into rawdata (or whatever name you prefer) using the following command:
## remember to change the directory
rawdata = read.table("~/Downloads/GEOG_DATA2/SpeciesMap.txt", header=T, sep="\t")
rawdata
To inspect the data, enter ‘rawdata’.
WRITTEN PART STARTS
What is the spatial reference system used, local or global? Why do you think so?
Local
Small coordinate range: the coordinate values lie between 0–40 (X) and 0–50 (Y), which does not match any typical range of latitude/longitude or other global coordinate systems.
Suitable for small-scale research: the data are measurements from an experimental plot or a specific site, for which local coordinate systems are usually used.
WRITTEN PART ENDS
rawdata$Species
plot(rawdata$x, rawdata$y)
Even though the data can look like a point pattern, we need to inform R that
it has the proper format. A point pattern can be generated using the ppp
(planar point pattern) function:
pts <- ppp(rawdata$x, rawdata$y, marks=rawdata$Species, xrange=c(0,40), yrange=c(0,50))
## Warning: data contain duplicated points
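The warning above flags coincident points. If you want to inspect or remove them before analysis (a judgement call, since duplicates can be genuine repeat records), spatstat provides:

```r
sum(duplicated(pts))       # how many points share a location with another
pts_unique <- unique(pts)  # keep one point per location
```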
Data on the kauri species can be extracted using its code:
kauri <- pts[rawdata$Species=='AGAU']
# density map
plot(density(kauri, sigma = 1.5),
main = "Kauri Density",
col = terrain.colors(100),
ribbon = TRUE)
# contour map
contour(density(kauri, sigma = 1.5),
main = "Contours",
nlevels = 6,
col = "darkred")
# combine
plot(density(kauri, sigma = 1.5),
main = "Kauri Density with Contours",
col = heat.colors(50))
contour(density(kauri, sigma = 1.5),
add = TRUE,
nlevels = 5,
col = "black")
WRITTEN PART STARTS
Based on these two maps, briefly describe the point pattern shown. Is it spatially random, even or clustered? Why?
Clustered
Density map: the warmest colours (around 0.4–0.5) are concentrated in a small area, indicating that kauri forms high-density clusters locally. The density gradient is also clear: moving outward from the cores, the colours cool (0.1–0.3) as density decreases, consistent with a clustering effect.
Contour map: multiple concentric closed loops (e.g. 0.3, 0.4, 0.5) surround the core areas, further confirming that density falls from the inside outward. The presence of several independent closed loops indicates that kauri is clustered in more than one local area.
According to the combined density–contour plot, kauri shows a clearly clustered distribution, with several high-density cores (0.4–0.5) and density gradients decreasing outward.
WRITTEN PART ENDS
Gest, Fest, Kest, and Lest are R functions that perform standard distance-based point pattern analysis. The nature of the K-function makes it difficult to graphically discern differences between \(L_{theo}\) and \(\hat L\) at lower \(d\) values. A workaround is to transform the data, which gives us the estimated L-function.
You can find more information about transformations between K and L functions below: http://personal.colby.edu/personal/m/mgimond/Spatial/K_and_L_functions.html
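As the linked page describes, the transformation is \(L(d) = \sqrt{K(d)/\pi}\), which you can verify against Lest() directly (a quick sanity check, not a required step):

```r
kk <- Kest(kauri)
head(sqrt(kk$iso / pi))  # manual L(d) = sqrt(K(d) / pi), isotropic estimate
head(Lest(kauri)$iso)    # spatstat's Lest gives the same values
```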
l = Gest(kauri)
k = Kest(kauri)
m = Lest(kauri)
n = Fest(kauri)
# Plot the results
plot(l, main="Gest Function for Kauri Trees")
plot(k, main="Kest Function for Kauri Trees")
plot(m, main="Lest Function for Kauri Trees")
plot(n, main="Fest Function for Kauri Trees")
Kenv = envelope(kauri, Kest, nsim=99, rank=1)
## Generating 99 simulations of CSR ...
## 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
## 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
## 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60,
## 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
## 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98,
## 99.
##
## Done.
plot(Kenv)
Lenv = envelope(kauri, Lest, nsim=99, rank=1)
## Generating 99 simulations of CSR ...
## 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
## 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
## 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60,
## 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
## 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98,
## 99.
##
## Done.
plot(Lenv)
Fenv = envelope(kauri, Fest, nsim=99, rank=1)
## Generating 99 simulations of CSR ...
## 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
## 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
## 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60,
## 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80,
## 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98,
## 99.
##
## Done.
plot(Fenv)
This will produce a plot showing the limits of the envelope of simulated patterns (99 in this case, with the first highest and lowest cases, as specified by rank=1 plotted as hi and lo). Interpretation of the plot is the primary means of assessing the degree of clustering (aggregation) or evenness in the pattern.
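The pointwise significance level of such an envelope follows from nsim and the rank; for a two-sided test it is \(\alpha = 2 \times \mathrm{rank} / (\mathrm{nsim} + 1)\), as documented in spatstat's envelope help page. For the settings above:

```r
nsim  <- 99
nrank <- 1
2 * nrank / (nsim + 1)  # 0.02: a pointwise 2% significance level
```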
WRITTEN PART STARTS
Kest function: measures the degree of point aggregation at different scales. The observed K lies above the theoretical value, meaning the kauri trees tend to cluster.
Lest function: a transformation of Kest that makes the clustering easier to see. An observed L above the theoretical line indicates a clustered pattern, while one below it indicates an even (regular) pattern. Here the observed L lies above, so the pattern is clustered.
Fest function: measures the distance from a random location to the nearest observed point. The observed F lies below the theoretical value, meaning empty space is more common than expected under CSR, which again indicates clustering rather than an even distribution.
Kenv: the envelope analysis compares the observed curve against 99 simulations of CSR (complete spatial randomness). The observed K exceeds the envelope (especially on the upper side), so the kauri trees show a significant clustering pattern.
WRITTEN PART ENDS
Another method of point pattern analysis is the quadrat counting approach using quadratcount:
q = quadratcount(kauri)
plot(kauri)
plot(q, add=T)
quadrat.test(kauri)
##
## Chi-squared test of CSR using quadrat counts
##
## data: kauri
## X2 = 160.61, df = 24, p-value < 2.2e-16
## alternative hypothesis: two.sided
##
## Quadrats: 5 by 5 grid of tiles
WRITTEN PART STARTS
Chi-squared statistic (X²): 160.61, showing that the observed counts deviate significantly from a random distribution.
Degrees of freedom (df): 24, from the 5 × 5 grid of quadrats.
p-value: < 2.2 × 10⁻¹⁶ (far less than 0.001), showing that the kauri distribution is non-random.
The quadrat counts are extremely uneven, confirming a clustered distribution pattern.
WRITTEN PART ENDS
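The default test uses a 5 × 5 grid, and the conclusion can depend on that choice (another face of the scale problem discussed earlier in this lab). It is easy to check other grid sizes:

```r
quadrat.test(kauri, nx = 4, ny = 4)   # coarser grid
quadrat.test(kauri, nx = 8, ny = 10)  # finer grid
```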
You can use the code provided by Gimond in their Intro to GIS tutorial: https://mgimond.github.io/Spatial/interpolation-in-r.html
Read in two shapefiles: Park_boundary and POints_Park_DOmain. The first file is the boundary of the Auckland Domain, whereas the second contains sample locations with values of relative humidity, temperature, and wind speed collected in the park over the last couple of years.
Hint: use autofitVariogram() from the automap package to get the psill, model, and range values for all three variables, e.g. autofitVariogram(Humidity ~ X + Y, points_domain)
library(sf)
library(gstat)
##
## Attaching package: 'gstat'
## The following object is masked from 'package:spatstat.explore':
##
## idw
library(sp)
library(automap)
library(ggplot2)
library(tmap)
# Load the Auckland Domain park boundary from the local shapefile (Park_boundary.shp)
w <- st_read("~/Downloads/GEOG_DATA2/Park_boundary.shp")
## Reading layer `Park_boundary' from data source
## `/Users/llllllyh/Downloads/GEOG_DATA2/Park_boundary.shp' using driver `ESRI Shapefile'
## Simple feature collection with 1 feature and 4 fields
## Geometry type: POLYGON
## Dimension: XY
## Bounding box: xmin: 1757775 ymin: 5918673 xmax: 1758799 ymax: 5919896
## Projected CRS: NZGD2000 / New Zealand Transverse Mercator 2000
# Load the sample-point data (humidity, temperature, wind) from the local shapefile (POints_Park_DOmain.shp)
p <- st_read("~/Downloads/GEOG_DATA2/Points_Park_Domain.shp")
## Reading layer `POints_Park_DOmain' from data source
## `/Users/llllllyh/Downloads/GEOG_DATA2/POints_Park_DOmain.shp'
## using driver `ESRI Shapefile'
## Simple feature collection with 35 features and 8 fields
## Geometry type: POINT
## Dimension: XY
## Bounding box: xmin: 1757911 ymin: 5918697 xmax: 1758662 ymax: 5919816
## Projected CRS: NZGD2000 / New Zealand Transverse Mercator 2000
# Humidity
tm_shape(w) + tm_polygons() +
tm_shape(p) +
tm_dots(col="Humidity", palette = "RdBu", auto.palette.mapping = FALSE,
title="Sampled humidity", size=0.7) +
tm_text("Humidity", just="left", xmod=.5, size = 0.7) +
tm_legend(legend.outside=TRUE)
##
## ── tmap v3 code detected ───────────────────────────────────────────────────────
## [v3->v4] `tm_tm_dots()`: migrate the argument(s) related to the scale of the
## visual variable `fill` namely 'palette' (rename to 'values') to fill.scale =
## tm_scale(<HERE>).
## [v3->v4] `tm_dots()`: use 'fill' for the fill color of polygons/symbols
## (instead of 'col'), and 'col' for the outlines (instead of 'border.col').
## [tm_dots()] Argument `title` unknown.
## [v3->v4] `tm_text()`: migrate the layer options 'just' to 'options =
## opt_tm_text(<HERE>)'
## [v3->v4] `tm_legend()`: use 'tm_legend()' inside a layer function, e.g.
## 'tm_polygons(..., fill.legend = tm_legend())'
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "RdBu" is named
## "brewer.rd_bu"
## Multiple palettes called "rd_bu" found: "brewer.rd_bu", "matplotlib.rd_bu". The first one, "brewer.rd_bu", is returned.
# Temperature
tm_shape(w) + tm_polygons() +
tm_shape(p) +
tm_dots(col="Temperatur", palette = "RdBu", auto.palette.mapping = FALSE,
title="Sampled temperature", size=0.7) +
tm_text("Temperatur", just="left", xmod=.5, size = 0.7) +
tm_legend(legend.outside=TRUE)
##
## ── tmap v3 code detected ───────────────────────────────────────────────────────
## [v3->v4] `tm_tm_dots()`: migrate the argument(s) related to the scale of the
## visual variable `fill` namely 'palette' (rename to 'values') to fill.scale =
## tm_scale(<HERE>).[tm_dots()] Argument `title` unknown.[v3->v4] `tm_text()`: migrate the layer options 'just' to 'options =
## opt_tm_text(<HERE>)'[cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "RdBu" is named
## "brewer.rd_bu"Multiple palettes called "rd_bu" found: "brewer.rd_bu", "matplotlib.rd_bu". The first one, "brewer.rd_bu", is returned.
# Wind
tm_shape(w) + tm_polygons() +
tm_shape(p) +
tm_dots(col="Wind", palette = "RdBu", auto.palette.mapping = FALSE,
title="Sampled wind speed", size=0.7) +
tm_text("Wind", just="left", xmod=.5, size = 0.7) +
tm_legend(legend.outside=TRUE)
##
## ── tmap v3 code detected ───────────────────────────────────────────────────────
## [v3->v4] `tm_tm_dots()`: migrate the argument(s) related to the scale of the
## visual variable `fill` namely 'palette' (rename to 'values') to fill.scale =
## tm_scale(<HERE>).[tm_dots()] Argument `title` unknown.[v3->v4] `tm_text()`: migrate the layer options 'just' to 'options =
## opt_tm_text(<HERE>)'[cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "RdBu" is named
## "brewer.rd_bu"Multiple palettes called "rd_bu" found: "brewer.rd_bu", "matplotlib.rd_bu". The first one, "brewer.rd_bu", is returned.
library(spatstat) # Used for the dirichlet tessellation function
library(sf) # For spatial functions and data
library(tmap) # For mapping
# Create a point pattern object from the sample points in 'p'
p_ppp <- as.ppp(p) # Convert the point data to a point pattern object
# Create a tessellated surface using Dirichlet tessellation
th <- dirichlet(p_ppp) |> st_as_sfc() |> st_as_sf()
# The dirichlet function does not carry over projection information,
# so we need to manually set the CRS from 'p' (assuming p is projected)
st_crs(th) <- st_crs(p)
# The tessellated surface does not store attribute information,
# so we join the point attributes from 'p' (Humidity, Temperatur, Wind)
# to the tessellated polygons
th2 <- st_join(th, p, fn = mean)
# Finally, we'll clip the tessellated surface to the park boundary ('w')
th.clp <- st_intersection(th2, w)
## Warning: attribute variables are assumed to be spatially constant throughout
## all geometries
# Humidity
tm_shape(th.clp) +
tm_polygons(col="Humidity", palette = "RdBu", auto.palette.mapping = FALSE,
title="Humidity") +
tm_legend(legend.outside=TRUE)
##
## ── tmap v3 code detected ───────────────────────────────────────────────────────
## [v3->v4] `tm_tm_polygons()`: migrate the argument(s) related to the scale of
## the visual variable `fill` namely 'palette' (rename to 'values') to fill.scale
## = tm_scale(<HERE>).
## [v3->v4] `tm_polygons()`: migrate the argument(s) related to the legend of the
## visual variable `fill` namely 'title' to 'fill.legend = tm_legend(<HERE>)'
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "RdBu" is named
## "brewer.rd_bu"
## Multiple palettes called "rd_bu" found: "brewer.rd_bu", "matplotlib.rd_bu". The first one, "brewer.rd_bu", is returned.
# Temperature
tm_shape(th.clp) +
tm_polygons(col="Temperatur", palette = "RdBu", auto.palette.mapping = FALSE,
title="Temperature") +
tm_legend(legend.outside=TRUE)
# Wind
tm_shape(th.clp) +
tm_polygons(col="Wind", palette = "RdBu", auto.palette.mapping = FALSE,
title="Wind") +
tm_legend(legend.outside=TRUE)
library(sf)
library(sp)
library(gstat)
library(terra)
## terra 1.7.78
##
## Attaching package: 'terra'
## The following objects are masked from 'package:spatstat.geom':
##
## area, delaunay, is.empty, rescale, rotate, shift, where.max,
## where.min
p_sp <- as(p, "Spatial")
w_sp <- as(w, "Spatial")
crs(p_sp) <- crs(w)
# Create an empty grid where n is the total number of cells
grd <- as.data.frame(spsample(p_sp, "regular", n=50000))
names(grd) <- c("X", "Y")
coordinates(grd) <- c("X", "Y")
gridded(grd) <- TRUE # Create SpatialPixel object
fullgrid(grd) <- TRUE # Create SpatialGrid object
# Add p's projection information to the empty grid
crs(grd) <- crs(p_sp)
# Humidity
# Interpolate the grid cells using a power value of 2 (idp=2.0)
p.idw <- idw(Humidity ~ 1, p_sp, newdata=grd, idp = 2.0)
## [inverse distance weighted interpolation]
# Convert to a raster object, then clip to the study-area boundary 'w'
r <- rast(p.idw)
r.m <- mask(r, st_as_sf(w))
# Plot
tm_shape(r.m["var1.pred"]) +
tm_raster(n=10, palette = "RdBu", auto.palette.mapping = FALSE,
title="Humidity") +
tm_shape(p) + tm_dots(size=0.2) +
tm_legend(legend.outside=TRUE)
##
## ── tmap v3 code detected ───────────────────────────────────────────────────────
## [v3->v4] `tm_tm_raster()`: migrate the argument(s) related to the scale of the
## visual variable `col` namely 'n', 'palette' (rename to 'values') to col.scale =
## tm_scale(<HERE>).
## [v3->v4] `tm_raster()`: migrate the argument(s) related to the legend of the
## visual variable `col` namely 'title' to 'col.legend = tm_legend(<HERE>)'
## [cols4all] color palettes: use palettes from the R package cols4all. Run
## `cols4all::c4a_gui()` to explore them. The old palette name "RdBu" is named
## "brewer.rd_bu"
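The idp exponent controls how quickly a station's influence decays with distance: each observation contributes with weight proportional to 1/d^idp, and the weights are normalized to sum to one. A minimal base-R sketch of this weighting (idw_point is a hypothetical 1-D helper for illustration, not part of gstat):

```r
# Hypothetical 1-D IDW helper illustrating the weighting idw() applies:
# each observation contributes with weight 1/d^idp, normalized to sum to 1.
idw_point <- function(x0, x, z, idp = 2) {
  d <- abs(x - x0)                       # distances to the prediction location
  if (any(d == 0)) return(z[d == 0][1]) # exact hit: return the observed value
  wgt <- 1 / d^idp                      # inverse-distance weights
  sum(wgt * z) / sum(wgt)               # weighted average
}
# Stations at x = 1, 2, 4 with values 10, 20, 40; predict at x = 3
idw_point(3, c(1, 2, 4), c(10, 20, 40)) # ~27.78: the nearer stations dominate
```

A larger idp (e.g., 3 or 4) concentrates weight on the closest stations and makes the surface look more like nearest-neighbour; a smaller idp smooths it toward the overall mean.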
# Temperature
# Interpolate the grid cells using a power value of 2 (idp=2.0)
p.idw <- idw(Temperatur ~ 1, p_sp, newdata=grd, idp = 2.0)
## [inverse distance weighted interpolation]
# Convert to a raster object, then clip to the study-area boundary 'w'
r <- rast(p.idw)
r.m <- mask(r, st_as_sf(w))
# Plot
tm_shape(r.m["var1.pred"]) +
tm_raster(n=10, palette = "RdBu", auto.palette.mapping = FALSE,
title="Temperature") +
tm_shape(p) + tm_dots(size=0.2) +
tm_legend(legend.outside=TRUE)
# Wind
# Interpolate the grid cells using a power value of 2 (idp=2.0)
p.idw <- idw(Wind ~ 1, p_sp, newdata=grd, idp = 2.0)
## [inverse distance weighted interpolation]
# Convert to a raster object, then clip to the study-area boundary 'w'
r <- rast(p.idw)
r.m <- mask(r, st_as_sf(w))
# Plot
tm_shape(r.m["var1.pred"]) +
tm_raster(n=10, palette = "RdBu", auto.palette.mapping = FALSE,
title="Wind") +
tm_shape(p) + tm_dots(size=0.2) +
tm_legend(legend.outside=TRUE)
# Define the 1st order polynomial equation
f.1 <- as.formula(Humidity ~ x + y)
# Compute the sample variogram; note that the f.1 trend model is one of the
# parameters passed to variogram(). This tells the function to create the
# variogram on the de-trended data.
var.smpl <- variogram(f.1, p, cloud = FALSE, cutoff=1000000, width=89900)
# Compute the variogram model by passing the nugget, sill and range values
# to fit.variogram() via the vgm() function.
dat.fit <- fit.variogram(var.smpl, fit.ranges = FALSE, fit.sills = FALSE,
vgm(psill=14, model="Sph", range=590000, nugget=1))
# The following plot allows us to assess the fit
plot(var.smpl, dat.fit, xlim=c(0,1000000)) # x-axis matches the variogram cutoff
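The vgm() call above fixes the spherical model parameters (psill = 14, range = 590000, nugget = 1). As a sanity check, the spherical semivariance can be evaluated directly; this is a base-R sketch of the standard spherical formula, not a gstat function:

```r
# Spherical variogram model: semivariance rises from the nugget and levels
# off at nugget + psill once the lag distance h reaches the range.
gamma_sph <- function(h, nugget, psill, range) {
  ifelse(h == 0, 0,
         ifelse(h < range,
                nugget + psill * (1.5 * h / range - 0.5 * (h / range)^3),
                nugget + psill))
}
gamma_sph(590000, nugget = 1, psill = 14, range = 590000) # 15: the sill
```

Beyond the range the model is flat at the sill (15 here), which is why pairs of stations farther apart than ~590 km carry no spatial correlation in the kriging weights.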
library(sp)
library(automap)
library(tmap)
# Humidity
# use autofitVariogram
varfit_hum <- autofitVariogram(Humidity ~ 1, p_sp)
# Perform the krige interpolation
kriging_hum <- krige(Humidity ~ 1, p_sp, grd, model = varfit_hum$var_model)
## [using ordinary kriging]
# Convert kriged surface to a raster object for clipping
r.hum.krg <- rast(kriging_hum)
r.hum.krg.m <- mask(r.hum.krg, st_as_sf(w))
# plot
tm_shape(r.hum.krg.m["var1.pred"]) +
tm_raster(n=10, palette="RdBu", title="Humidity") +
tm_shape(p) + tm_dots(size=0.2) +
tm_legend(legend.outside=TRUE)
# Temperature
# use autofitVariogram
varfit_temp <- autofitVariogram(Temperatur ~ 1, p_sp)
# Perform the krige interpolation
kriging_temp <- krige(Temperatur ~ 1, p_sp, grd, model = varfit_temp$var_model)
## [using ordinary kriging]
# Convert kriged surface to a raster object for clipping
r.temp.krg <- rast(kriging_temp)
r.temp.krg.m <- mask(r.temp.krg, st_as_sf(w))
# plot
tm_shape(r.temp.krg.m["var1.pred"]) +
tm_raster(n=10, palette="RdBu", title="Temperature") +
tm_shape(p) + tm_dots(size=0.2) +
tm_legend(legend.outside=TRUE)
# Wind
# use autofitVariogram
varfit_wind <- autofitVariogram(Wind ~ 1, p_sp)
# Perform the krige interpolation
kriging_wind <- krige(Wind ~ 1, p_sp, grd, model = varfit_wind$var_model)
## [using ordinary kriging]
# Convert kriged surface to a raster object for clipping
r.wind.krg <- rast(kriging_wind)
r.wind.krg.m <- mask(r.wind.krg, st_as_sf(w))
# plot
tm_shape(r.wind.krg.m["var1.pred"]) +
tm_raster(n=10, palette="RdBu", title="Wind") +
tm_shape(p) + tm_dots(size=0.2) +
tm_legend(legend.outside=TRUE)
tm_shape(r.hum.krg.m) +
tm_raster(n=7, palette="Reds", title="Humidity") +
tm_legend(legend.outside=TRUE)
tm_shape(r.temp.krg.m) +
tm_raster(n=7, palette="Reds", title="Temperature") +
tm_legend(legend.outside=TRUE)
tm_shape(r.wind.krg.m) +
tm_raster(n=7, palette="Reds", title="Wind") +
tm_legend(legend.outside=TRUE)
# Humidity
r1 <- rast(kriging_hum)
r.m1 <- mask(sqrt(r1["var1.var"])* 1.96, st_as_sf(w))
tm_shape(r.m1) +
tm_raster(n=7, palette ="Reds",
title="95% CI map \n(Humidity in %)") +
tm_shape(p) + tm_dots(size=0.2) +
tm_legend(legend.outside=TRUE)
# Temperature
r2 <- rast(kriging_temp)
r.m2 <- mask(sqrt(r2["var1.var"])* 1.96, st_as_sf(w))
tm_shape(r.m2) +
tm_raster(n=7, palette ="Reds",
title="95% CI map \n(Temperature in C)") +
tm_shape(p) + tm_dots(size=0.2) +
tm_legend(legend.outside=TRUE)
# Wind
r3 <- rast(kriging_wind) # use the wind kriging output, not humidity
r.m3 <- mask(sqrt(r3["var1.var"])* 1.96, st_as_sf(w))
tm_shape(r.m3) +
tm_raster(n=7, palette ="Reds",
title="95% CI map \n(Wind in m/s)") +
tm_shape(p) + tm_dots(size=0.2) +
tm_legend(legend.outside=TRUE)
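All three CI maps apply the same transformation to the kriging output: the prediction variance (var1.var) is converted to a standard error and scaled by 1.96 to give the half-width of a 95% confidence interval. As a one-line sketch of that arithmetic:

```r
# Half-width of a 95% confidence interval from a kriging variance:
# margin = 1.96 * sqrt(variance), the same arithmetic applied to each cell.
ci_margin <- function(krig_var) 1.96 * sqrt(krig_var)
ci_margin(4) # variance of 4 -> standard error 2 -> margin of 3.92
```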
WRITTEN PART STARTS
Voronoi (Thiessen) polygons:
- Partition the study area into one polygon per observation point; every location inherits the value of its nearest point
- The result is a step-like surface with discontinuous boundaries between polygons

IDW:
- Estimates unknown locations as a distance-weighted average of the surrounding observations
- Generates a smooth, continuous surface
- Provides no estimate of prediction error

Kriging:
- A geostatistical method based on spatial autocorrelation
- Accounts not only for distance but also for the spatial structure of the data, captured by the variogram
- Provides the best linear unbiased prediction
- Computationally more involved, since a variogram model must be fitted

Key differences:
1. Complexity: Voronoi < IDW < Kriging
2. Accuracy: Voronoi is generally the lowest, Kriging the highest
3. Additional information: only Kriging provides a prediction-variance (error) estimate
4. Speed: Voronoi is the fastest, Kriging the slowest
5. Applicability: Voronoi suits discrete data, IDW suits simple smooth surfaces, and Kriging suits analyses that require accuracy and uncertainty estimates
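The contrast between Voronoi's step-like surface and IDW's blended surface can be illustrated with a hypothetical 1-D sketch (nearest_value is an illustrative helper, not a package function):

```r
# Voronoi prediction in 1-D: a location simply inherits the value of its
# nearest observation, producing a piecewise-constant (step-like) surface.
nearest_value <- function(x0, x, z) z[which.min(abs(x - x0))]
nearest_value(3.5, c(1, 2, 4), c(10, 20, 40)) # 40: x = 4 is nearest
nearest_value(1.4, c(1, 2, 4), c(10, 20, 40)) # 10: x = 1 is nearest
```

Between the two calls the prediction jumps from one observed value to another with no intermediate values, which is exactly the discontinuity visible at the Thiessen polygon boundaries in the maps above; IDW and kriging instead blend neighbouring values into a continuous surface.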
WRITTEN PART ENDS